Towards Assessing Data Replication in Music Generation with Music Similarity Metrics on Raw Audio
Batlle-Roca, Roser, Liao, Wei-Hsiang, Serra, Xavier, Mitsufuji, Yuki, Gómez, Emilia
Recent advancements in music generation are raising multiple concerns about the implications of AI in creative music processes, current business models and impacts related to intellectual property management. A relevant discussion and related technical challenge is the potential replication and plagiarism of the training set in AI-generated music, which could lead to misuse of data and intellectual property rights violations. To tackle this issue, we present the Music Replication Assessment (MiRA) tool: a model-independent open evaluation method based on diverse audio music similarity metrics to assess data replication. We evaluate the ability of five metrics to identify exact replication by conducting a controlled replication experiment in different music genres using synthetic samples. Our results show that the proposed methodology can estimate exact data replication with a proportion higher than 10%. By introducing the MiRA tool, we intend to encourage the open evaluation of music-generative models by researchers, developers, and users concerning data replication, highlighting the importance of the ethical, social, legal, and economic consequences. Code and examples are available for reproducibility purposes.
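The abstract does not name the five similarity metrics, but the core idea of flagging replication can be sketched simply: compare a generated clip's feature vector against every training clip and treat a near-exact match as suspected replication. A minimal sketch, assuming cosine similarity over per-clip feature vectors (the function names and the 128-dimensional features are illustrative, not MiRA's actual metrics):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def replication_score(candidate, training_set):
    """Highest similarity of a candidate against the whole training set;
    a score near 1.0 suggests the candidate replicates a training item."""
    return max(cosine_sim(candidate, t) for t in training_set)

rng = np.random.default_rng(0)
training = [rng.normal(size=128) for _ in range(100)]  # stand-in training features
exact = training[17].copy()        # an exact copy of a training item
novel = rng.normal(size=128)       # an unrelated "novel" sample

print(replication_score(exact, training))         # ~1.0, flagged as replication
print(replication_score(novel, training) < 0.99)  # True, not flagged
```

In practice the feature vectors would come from the audio itself (e.g., spectral or learned embeddings), and the decision threshold would be calibrated on controlled replication experiments like those described above.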
Transfer Learning for Underrepresented Music Generation
Doosti, Anahita, Guzdial, Matthew
Combinational creativity, also sometimes combinatorial creativity, is a type of creative problem solving in which two conceptual spaces are combined to represent a third or new conceptual space (Boden 2009). While different musical genres may vary in terms of their local features (e.g., melodies), they are all still music. As such, we [...]

[...] network models for music generation have arisen, trained on massive datasets and requiring significant computation (Civit et al. 2022). While these approaches have proven successful at replicating genres of music like those in their training sets, due to the nature of large-scale neural network models we expect this may not prove true for dissimilar genres.
Can GAN originate new electronic dance music genres? -- Generating novel rhythm patterns using GAN with Genre Ambiguity Loss
Since the introduction of deep learning, researchers have proposed content generation systems using deep learning and proved that they are competent to generate convincing content and artistic output, including music. However, one can argue that these deep learning-based systems imitate and reproduce the patterns inherent in what humans have created, instead of generating something new and creative. This paper focuses on music generation, especially rhythm patterns of electronic dance music, and discusses whether we can use deep learning to generate novel rhythms: interesting patterns not found in the training dataset. We extend the framework of Generative Adversarial Networks (GAN) and encourage it to diverge from the dataset's inherent distributions by adding additional classifiers to the framework. The paper shows that our proposed GAN can generate rhythm patterns that sound like music rhythms but do not belong to any genre in the training dataset. The source code, generated rhythm patterns, and a supplementary plugin for a popular Digital Audio Workstation are available on our website.
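The "Genre Ambiguity Loss" named in the title can be read as a CAN-style term: alongside the usual adversarial loss, the generator is penalized when an auxiliary genre classifier assigns its output confidently to any one training genre. A minimal sketch, assuming the ambiguity term is cross-entropy between the classifier's genre distribution and the uniform distribution (one common formulation; the paper's exact loss may differ):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def genre_ambiguity_loss(logits):
    """Cross-entropy between the genre classifier's output distribution and
    the uniform distribution over genres. It is minimized when the sample
    is equally ambiguous across all genres, so minimizing it pushes the
    generator away from any single training genre."""
    p = softmax(logits)
    k = len(p)
    uniform = np.full(k, 1.0 / k)
    return float(-(uniform * np.log(p + 1e-12)).sum())

confident = np.array([8.0, 0.0, 0.0, 0.0])  # classifier is sure of one genre
ambiguous = np.zeros(4)                      # no clear genre
print(genre_ambiguity_loss(confident) > genre_ambiguity_loss(ambiguous))  # True
```

The generator's total objective would then be the standard GAN loss plus a weighted copy of this term, trading realism ("sounds like a music rhythm") against genre ambiguity ("belongs to no training genre").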
Machine Learning for Dummies with TensorFlow.js
I was recently messing around with the new TensorFlow.js. Since I can only do things with JS, I was glad to hear about this becoming available. From my brief experimentation, I have found the API to be extremely easy to use, given one has some basic Machine Learning concepts under one's belt. I devised a simple experiment which I didn't particularly expect to be fruitful, but if I was able to get a functioning model it would be a proof of concept for handling actual datasets. As I suspected, the results were bad for predicting new examples, but I still think my efforts were productive enough to be worth sharing, and I definitely learned some things along the way. My initial idea was to create a classifier for music genres, one which, given a new example of a simple melody, would be able to classify it as one of four: blues, pop, jazz, and metal.
Low-dimensional Embodied Semantics for Music and Language
Raposo, Francisco Afonso, de Matos, David Martins, Ribeiro, Ricardo
Embodied cognition states that semantics is encoded in the brain as firing patterns of neural circuits, which are learned according to the statistical structure of human multimodal experience. However, each human brain is idiosyncratically biased, according to its subjective experience history, making this biological semantic machinery noisy with respect to the overall semantics inherent to media artifacts, such as music and language excerpts. We propose to represent shared semantics using low-dimensional vector embeddings by jointly modeling several brains from human subjects. We show these unsupervised efficient representations outperform the original high-dimensional fMRI voxel spaces in proxy music genre and language topic classification tasks. We further show that joint modeling of several subjects increases the semantic richness of the learned latent vector spaces.
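The joint modeling step, mapping several subjects' high-dimensional fMRI voxel responses to one shared low-dimensional embedding, can be sketched with a simple linear stand-in: stack the subjects' stimulus-by-voxel matrices column-wise and take a truncated SVD. This is an assumption for illustration; the paper's actual model is not specified in the abstract:

```python
import numpy as np

def shared_embedding(subject_data, k):
    """Joint low-dimensional embedding of stimuli from several subjects.

    subject_data: list of (n_stimuli x n_voxels) arrays, one per subject,
    rows aligned by stimulus. Returns an (n_stimuli x k) embedding that
    pools variance shared across subjects (a simple SVD stand-in for the
    paper's joint model).
    """
    X = np.hstack(subject_data)   # n_stimuli x (total voxel dims)
    X = X - X.mean(axis=0)        # center each feature
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * S[:k]       # project stimuli onto top-k components

rng = np.random.default_rng(1)
subjects = [rng.normal(size=(40, 500)) for _ in range(3)]  # 3 toy "brains"
Z = shared_embedding(subjects, k=8)
print(Z.shape)  # (40, 8)
```

The resulting 8-dimensional vectors would then feed the proxy classification tasks (music genre, language topic) in place of the raw voxel spaces.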
Machine learning and chord based feature engineering for genre prediction in popular Brazilian music
Wundervald, Bruna D., Zeviani, Walmes M.
Music genre can be hard to describe: many factors are involved, such as style, music technique, and historical context. Some genres even have overlapping characteristics. Looking for a better understanding of how music genres are related to musical harmonic structures, we gathered data about the music chords for thousands of popular Brazilian songs. Here, 'popular' does not only refer to the genre named MPB (Brazilian Popular Music) but to nine different genres that were considered particular to the Brazilian case. The main goals of the present work are to extract and engineer harmonically related features from chords data and to use them to classify popular Brazilian music genres, towards establishing a connection between harmonic relationships and Brazilian genres. We also emphasize the generalisation of the method for obtaining the data, allowing for the replication and direct extension of this work. Our final model is a combination of multiple classification trees, also known as the random forest model. We found that features extracted from harmonic elements, as well as features obtained from the Spotify API, can satisfactorily predict music genre for the Brazilian case. The variables considered in this work also give an intuition about how they relate to the genres.
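The feature-engineering step, turning a raw chord sequence into numeric harmonic descriptors a random forest can consume, can be sketched as below. The specific features here (minor-chord proportion, seventh-chord proportion, chord vocabulary size, dominant transition frequency) are illustrative assumptions, not the paper's exact feature set:

```python
from collections import Counter

def chord_features(chords):
    """Toy harmonic features from a chord-symbol sequence.

    Each feature is normalized by sequence length so songs of different
    lengths are comparable; a tree ensemble would then be trained on
    one such feature row per song.
    """
    n = len(chords)
    counts = Counter(chords)
    transitions = Counter(zip(chords, chords[1:]))
    return {
        "prop_minor": sum(c for ch, c in counts.items() if ch.endswith("m")) / n,
        "prop_seventh": sum(c for ch, c in counts.items() if "7" in ch) / n,
        "distinct_chords": len(counts) / n,
        "top_transition_freq": max(transitions.values()) / (n - 1),
    }

feats = chord_features(["C", "Am", "F", "G7", "C", "Am", "F", "G7"])
print(feats["prop_minor"])  # 0.25
```

Rows like this, one per song and joined with Spotify API descriptors, would form the design matrix for the random forest classifier described above.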